Current Issue: July-September | Volume: 2025 | Issue Number: 3 | Articles: 5
Recently, the IT industry has grown and cloud services have expanded rapidly, making cybersecurity to protect sensitive data from attacks an increasingly important concern. As cloud services scale up, however, the attack surface grows, and the resulting complex cyber environment becomes difficult to manage and defend. With the rise of artificial intelligence, applying AI to cyber environments to automatically detect and respond to cyberattacks has begun to attract attention. Applying artificial intelligence in cyber environments requires a simulation framework that is easy to adopt and can represent real-world situations faithfully. In this study, we introduce the Cyber Environment (CYE) framework, which provides useful components that abstract complex, large-scale cloud environments. We then use CYE to reproduce real-world situations as scenarios and apply reinforcement learning to train automated, intelligent defense agents....
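The abstract does not spell out the CYE scenario details, so as a rough illustration of the reinforcement-learning setup it describes, the sketch below trains a tabular Q-learning defender in a hypothetical, heavily simplified cyber environment; the states, actions, dynamics, and rewards are invented placeholders, not the paper's actual framework.

import random

# Hypothetical toy stand-in for a CYE scenario: each state is the number of
# compromised hosts (0..3); the defender picks an action every step.
ACTIONS = ["monitor", "patch", "isolate"]
N_STATES = 4

def step(state, action):
    """Very simplified environment dynamics (assumed, not from the paper)."""
    if action == "patch" and state > 0:
        state -= 1                      # patching recovers one host
    elif action == "isolate" and state > 0:
        state = max(0, state - 2)       # isolation recovers more, but is disruptive
    if random.random() < 0.4 and state < N_STATES - 1:
        state += 1                      # attacker compromises another host
    reward = -state - (1 if action == "isolate" else 0)
    return state, reward

# Tabular Q-learning defender.
q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state = 0
    for _ in range(50):
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        next_state, reward = step(state, ACTIONS[a])
        q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
        state = next_state

for s in range(N_STATES):
    best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
    print(f"{s} compromised host(s) -> learned action: {best}")

The same loop structure carries over to richer scenarios: the environment grows to cloud-scale state and action spaces, and the tabular Q-values are replaced by a function approximator.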
Data mining methods are among the most promising solutions against cyber risks. Big data and electronic threats have grown rapidly in recent years, posing a challenge to cyber security, and data-driven prediction has become a fundamental tool for anticipating cyber risks and improving security measures. Cyber security is one of the most pressing challenges of the current era: cyber-attacks and risks are increasing so quickly that developing effective security tools and techniques is essential, and data mining techniques are among the important methods used to address these problems. In this study, we use a support vector machine and a random forest as risk-prediction tools, test them on a chosen dataset to detect attacks and risks to information, and report the results. We then merge the two models and compare them to find the best way to predict cyber security risks, identify the unusual data patterns that indicate suspicious activity, and improve the response to them....
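As a rough sketch of the kind of comparison this abstract describes (the actual dataset and preprocessing are not given here, so synthetic data stands in for the chosen attack data), a scikit-learn script might evaluate a support vector machine, a random forest, and a soft-voting combination of the two:

from sklearn.datasets import make_classification          # placeholder for the chosen dataset
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in: features describing network events, label 1 = attack/suspicious activity.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

svm = SVC(kernel="rbf", probability=True, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
merged = VotingClassifier([("svm", svm), ("rf", rf)], voting="soft")

# Compare the individual models with the merged (voting) model.
for name, model in [("SVM", svm), ("Random Forest", rf), ("SVM+RF voting", merged)]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))

Soft voting averages the two models' predicted probabilities, which is one straightforward way to "merge" an SVM and a random forest before comparing the combination against each model alone.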
Identification of cyber threats is crucial for determining effective security techniques and approaches. This research offers a brief discussion of cyberspace challenges and threats in a disruptive era, alongside comprehensive approaches for mitigating the risk of cyber threats, and aims to provide practical guidance on how to handle cyber threats in detail. For example, threats and attacks may arise in the absence of legislation, ethical standards, support systems, and adequate access control. The governance system, therefore, must put substantial effort into communicating, identifying, and enforcing security principles to moderate risk. Metaheuristic algorithms are inspired by the human brain, so applying Artificial Intelligence (AI) techniques that help neural networks mimic the behaviour of the human brain is important for predicting significant outcomes. In this study, the author investigates and analyses the rapid growth of cyber threats in order to outline solutions, with the aim of contributing to cyber security optimization through AI methodologies....
What makes safety claims about general-purpose AI systems such as large language models trustworthy? We show that it is not the capabilities of security tools such as alignment and red-teaming procedures, but the security practices built on these tools, that have reconfigured the image of AI safety and made such claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill that gap and identify several shortcomings in diversity and participation. We find that these security practices are part of securitization processes aimed at supporting the (commercial) development of general-purpose AI systems whose trustworthiness can only be imperfectly tested rather than guaranteed. We conclude by offering several improvements to current AI security practices....
The rapid proliferation of autonomous vehicles (AVs) presents a transformative technological leap with significant implications for safety, efficiency, and societal well-being. However, this advancement introduces a complex web of security vulnerabilities posing substantial risks to national security. This analysis explores the multifaceted security challenges inherent in AV software, examining their cascading effects on critical infrastructure, economic stability, and national defense. The core argument is that AV software vulnerabilities represent a significant and underappreciated national security threat. This threat manifests in several key ways. First, the interconnectedness of AVs, which rely on communication networks (5G, V2X), mapping data, and real-time information sharing, creates a large attack surface; cyberattacks can cause cascading failures, disrupting transportation networks and impacting supply chains. Second, the economic consequences of large-scale attacks are potentially devastating, encompassing vehicle recalls, repairs, insurance claims, and lost productivity, leading to economic instability and national security concerns. Third, the integration of AV technology into military applications amplifies these risks: autonomous military systems inherit, and may amplify, the vulnerabilities present in civilian AVs, jeopardizing national defense capabilities. The geopolitical implications are significant; state-sponsored attacks and the potential for cyber warfare demand international cooperation in establishing cybersecurity norms and standards. Furthermore, the potential for AV misuse in terrorism or criminal activities, alongside data privacy concerns, increases the urgency of addressing these security challenges. Connected AVs collect vast amounts of data, presenting significant privacy risks and potential for exploitation. The analysis details various software vulnerabilities, including susceptibility to adversarial attacks on sensor systems (cameras, LiDAR, radar), vulnerabilities in machine-learning-based decision-making algorithms (adversarial examples, model poisoning, algorithmic bias), weaknesses in V2X communication protocols, risks associated with over-the-air (OTA) updates, and supply chain security challenges. Hardware vulnerabilities further expand the attack surface. Mitigating these risks necessitates a multi-pronged approach: robust cybersecurity regulations and standards, investment in research and development of more resilient systems, and strong international cooperation. Implementing secure software development lifecycle practices, enhanced hardware security, transparent supply chains via SBOMs, and comprehensive incident response plans are crucial. Continuous research, development, and collaboration are essential for ensuring the secure and responsible deployment of AVs and preventing significant consequences for national security and global stability....